Look on my thesis, ye mighty: gaze interaction and social robotics

Presented by: Vidya Somashekarappa
Date: April 25, 2024

You are cordially invited to the public defence of Vidya Somashekarappa's doctoral thesis on Thursday 25 April at 13:15 in room J330, Faculty of Humanities. The title of the thesis is "Look on my thesis, ye mighty: gaze interaction and social robotics".

Doctoral candidate: Vidya Somashekarappa, University of Gothenburg

Supervisor: Asad Sayeed, University of Gothenburg

Assistant supervisor: Christine Howes, University of Gothenburg

Opponent: Maîtresse de conférences/Senior Lecturer Dominique Knutsen, Université de Lille

Committee: Professor Danielle Matthews, University of Sheffield; Associate Professor Aleksandrs Berdicevskis, University of Gothenburg; Assistant Professor Emilia Barakova, Eindhoven University of Technology

Chair: Docent Eva-Marie Karin Bloom Ström, University of Gothenburg

Date: 2024-04-25

Title: Look on my thesis, ye mighty: gaze interaction and social robotics

Abstract: Gaze, a significant non-verbal social signal, conveys attentional cues and provides insight into others' intentions and future actions. The thesis examines the intricate aspects of gaze in human-human dyadic interaction, aiming to extract insights applicable to enhancing multimodal human-agent dialogue. By annotating various types of gaze behavior alongside speech, the thesis explores the meaning of temporal patterns in gaze cues and their correlations. Leveraging a multimodal corpus of dyadic taste-testing interactions, the thesis further investigates the relationship between laughter, pragmatic functions, and accompanying gaze patterns. The findings reveal that laughter serves different pragmatic functions in association with distinct gaze patterns, underscoring the importance of laughter and gaze in multimodal meaning construction and coordination, which is relevant for designing human-like conversational agents. The thesis also proposes a novel approach to gaze estimation using a neural network architecture that accounts for dynamic patterns of real-world gaze behavior in natural interaction. The framework aims to facilitate responsive and intuitive interaction by enabling robots and avatars to communicate with humans through natural multimodal dialogue. It performs unified gaze detection, gaze-object prediction, and object-landmark heatmap generation. Evaluation on annotated datasets demonstrates superior performance compared to previous methods, with promising implications for implementing contextualized gaze-tracking behavior in robotic interaction. Finally, the thesis investigates the impact of different robot gaze patterns on Human-Robot Interaction (HRI). The results suggest that manipulating robot gaze based on human-human interaction patterns positively influences user perceptions, enhancing anthropomorphism and engagement.

Full text: here